Most camera lens systems are designed in isolation, separately from downstream computer vision methods. Recently, joint optimization approaches that design lenses alongside other components of the image acquisition and processing pipeline -- notably, downstream neural networks -- have achieved improved imaging quality or better performance on vision tasks. However, these existing methods optimize only a subset of lens parameters and cannot optimize glass materials given their categorical nature. In this work, we develop a differentiable spherical lens simulation model that accurately captures geometrical aberrations. We propose an optimization strategy to address the challenges of lens design -- notorious for non-convex loss function landscapes and many manufacturing constraints -- which are exacerbated in joint optimization tasks. Specifically, we introduce quantized continuous glass variables to facilitate the optimization and selection of glass materials in an end-to-end design context, and couple this with carefully designed constraints to support manufacturability. In automotive object detection, we show improved detection performance over existing designs even when simplifying to two- or three-element lenses, despite their significantly degraded image quality. Code and optical designs will be made publicly available.
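To illustrate the idea of quantized continuous glass variables, the sketch below shows one possible realization in PyTorch: a continuous, optimizable index is snapped to an entry of a small, hypothetical glass catalog in the forward pass, while gradients flow through the continuous value (a straight-through estimator). The catalog values and the estimator choice are assumptions for illustration, not the paper's exact formulation.

```python
import torch

# Hypothetical glass catalog (refractive index n_d, Abbe number V_d); a real
# design space would use a full manufacturer catalog.
GLASS_CATALOG = torch.tensor([
    [1.5168, 64.17],
    [1.6200, 36.37],
    [1.7552, 27.51],
])

class QuantizedGlassVariable(torch.nn.Module):
    """Continuous index in [0, 1) snapped onto a discrete glass catalog (sketch)."""

    def __init__(self):
        super().__init__()
        self.u = torch.nn.Parameter(torch.rand(()))  # continuous glass variable

    def forward(self):
        k = GLASS_CATALOG.shape[0]
        idx_cont = self.u.clamp(0.0, 1.0 - 1e-6) * k   # continuous catalog position
        idx_hard = idx_cont.detach().floor().long()    # quantized catalog row
        ste = idx_cont - idx_cont.detach()             # zero-valued, carries gradient
        n_d, v_d = GLASS_CATALOG[idx_hard]
        # Forward pass returns quantized catalog values; backward pass routes the
        # gradient straight through to the continuous variable u.
        return n_d + ste, v_d + ste

glass = QuantizedGlassVariable()
n_d, v_d = glass()
loss = (n_d - 1.60) ** 2       # toy objective, just to check gradients flow
loss.backward()
print(float(n_d), float(v_d), float(glass.u.grad))
```

The appeal of this kind of relaxation is that a single continuous parameter can be updated by the same gradient-based optimizer as the lens surfaces, while the values actually used by the simulator always correspond to a real, manufacturable glass.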
We present a method for estimating lighting from a single perspective image of an indoor scene. Previous methods for predicting indoor illumination usually focus on either simple parametric lighting that lacks realism, or on richer representations that are difficult or even impossible to understand or modify after prediction. We propose a pipeline that estimates a parametric light that is easy to edit and allows renderings with strong shadows, alongside a non-parametric texture with the high-frequency information necessary for realistic rendering of specular objects. Once estimated, the predictions obtained with our model are interpretable and can easily be modified by an artist or user with a few mouse clicks. Quantitative and qualitative results show that our approach makes indoor lighting estimation easier to handle by a casual user, while still producing competitive results.
From digital art to AR and VR experiences, image editing and compositing have become ubiquitous. To produce beautiful composites, the camera needs to be geometrically calibrated, which can be tedious and requires a physical calibration target. In place of the traditional multi-image calibration process, we propose to infer camera calibration parameters such as pitch, roll, field of view, and lens distortion directly from a single image using a deep convolutional neural network. We train this network using automatically generated samples from a large-scale panorama dataset, yielding competitive accuracy in terms of the standard L2 error. However, we argue that minimizing such standard error metrics may not be optimal for many applications. In this work, we investigate human sensitivity to inaccuracies in geometric camera calibration. To this end, we conduct a large-scale human perception study in which we ask participants to judge the realism of 3D objects composited with correct and biased camera calibration parameters. Based on this study, we develop a new perceptual measure for camera calibration, and we demonstrate that our deep calibration network outperforms previous single-image based calibration methods both on standard metrics and on this novel perceptual measure. Finally, we demonstrate the use of our calibration network for several applications, including virtual object insertion, image retrieval, and compositing. A demo of our method is available at https://lvsn.github.io/deepcalib.
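As an illustration only, a single-image calibration network of the kind described above could look like the following PyTorch sketch: a standard CNN backbone with a small head predicting pitch, roll, vertical field of view, and one radial distortion coefficient. The backbone choice, the regression-style output, and the value ranges are assumptions, not the paper's actual architecture.

```python
import torch
import torchvision

class SingleImageCalibNet(torch.nn.Module):
    """Sketch of a single-image calibration regressor (illustrative, not the paper's model)."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = torch.nn.Identity()        # keep 512-d features, drop the classifier
        self.backbone = backbone
        self.head = torch.nn.Linear(512, 4)      # pitch, roll, vertical FoV, k1 (distortion)

    def forward(self, img):
        z = self.backbone(img)
        pitch, roll, vfov, k1 = self.head(z).unbind(dim=-1)
        return {
            "pitch": torch.tanh(pitch) * 1.5,           # radians, roughly +/- 86 deg (assumed range)
            "roll": torch.tanh(roll) * 0.8,             # radians (assumed range)
            "vfov": torch.sigmoid(vfov) * 1.6 + 0.3,    # radians, roughly 17-109 deg (assumed range)
            "k1": torch.tanh(k1) * 0.5,                 # radial distortion coefficient (assumed range)
        }

net = SingleImageCalibNet()
out = net(torch.randn(2, 3, 224, 224))   # dummy batch of two images
print({k: v.shape for k, v in out.items()})
```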
We present PanoHDR-NeRF, a novel pipeline to casually capture a plausible full HDR radiance field of a large indoor scene without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, which are subsequently used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas from any location in the scene. Through experiments on a new test dataset of various real scenes, with ground truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible radiance from any scene point. We also show that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are correctly lit.
Scene inference under low light is a challenging problem due to the severe noise in the captured images. One way to reduce noise is to use a longer exposure during capture. However, in the presence of motion (scene or camera motion), longer exposures lead to motion blur and thus to a loss of image information. This creates a trade-off between these two kinds of image degradation: motion blur (due to long exposure) vs. noise (due to short exposure), also referred to in this paper as the dual image corruption pair. With the rise of cameras capable of capturing multiple exposures of the same scene simultaneously, it is possible to overcome this trade-off. Our key observation is that although the amount and nature of degradation varies across these different image captures, the semantic content remains the same across all of them. To this end, we propose a method to leverage these multi-exposure captures for robust inference under low light and motion. Our method builds on a feature consistency loss to encourage similar results from these individual captures, and uses an ensemble of their final predictions for robust visual recognition. We demonstrate the effectiveness of our approach on simulated images as well as real captures with multiple exposures, on the tasks of object detection and image classification.
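To make the feature-consistency-plus-ensembling idea concrete, here is a minimal PyTorch sketch under assumed interfaces: a shared backbone and classifier process each exposure of the same scene, a simple mean-feature consistency term pulls the per-exposure features together, and the final prediction averages the per-exposure logits. The paper's actual architecture, loss form, and weighting may differ.

```python
import torch
import torch.nn.functional as F

def multi_exposure_step(backbone, classifier, exposures, labels, lam=0.1):
    """One training step over a list of image batches, one per exposure bracket (sketch)."""
    feats = [backbone(x) for x in exposures]          # per-exposure features
    logits = [classifier(f) for f in feats]           # per-exposure predictions

    # Feature consistency: each capture's features should stay close to their mean.
    mean_feat = torch.stack(feats).mean(dim=0)
    consistency = sum(F.mse_loss(f, mean_feat) for f in feats) / len(feats)

    # Task loss on every exposure; the final output ensembles (averages) the logits.
    task = sum(F.cross_entropy(l, labels) for l in logits) / len(logits)
    ensemble_logits = torch.stack(logits).mean(dim=0)

    return task + lam * consistency, ensemble_logits

# Toy usage with dummy modules and three exposures of a 4-image batch.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
classifier = torch.nn.Linear(64, 10)
exposures = [torch.randn(4, 3, 32, 32) for _ in range(3)]
labels = torch.randint(0, 10, (4,))
loss, preds = multi_exposure_step(backbone, classifier, exposures, labels)
loss.backward()
```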
We propose a method to infer a 360{\deg} field of view from a single image, allowing for user-controlled synthesis of the out-painted content. To do so, we propose improving an existing GAN-based in-painting architecture for out-painting panoramic imagery. Our method obtains state-of-the-art results and outperforms previous methods on standard image quality metrics. To allow controlled synthesis of the out-painted content, we introduce a novel guided co-modulation framework, which drives the image generation process with a common discriminative model. Doing so maintains the high visual quality of the generated panoramas while enabling user-controlled semantic content in the extrapolated field of view. We demonstrate the state-of-the-art results of our method both qualitatively and quantitatively, and provide a thorough analysis of our novel editing capabilities. Finally, we show that our method benefits the photorealistic virtual insertion of highly glossy objects in photographs.
Most image-to-image translation methods require a large number of training images, which restricts their applicability. We propose ManiFest: a framework for few-shot image translation that learns a context-aware representation of a target domain from only a few images. To enforce feature consistency, our framework learns a style manifold between source and proxy anchor domains (assumed to be composed of large numbers of images). The learned manifold is interpolated and deformed towards the few-shot target domain via patch-based adversarial and feature statistics alignment losses. All of these components are trained simultaneously during a single end-to-end loop. Beyond the general few-shot translation task, our approach can also be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state-of-the-art on all metrics in both the general and the exemplar-based scenarios. Our code will be open source.
People are not very good at detecting lies, which may explain why they refrain from accusing others of lying, given the social costs attached to false accusations - both for the accuser and the accused. Here we consider how this social balance might be disrupted by the availability of lie-detection algorithms powered by Artificial Intelligence. Will people elect to use lie detection algorithms that perform better than humans, and if so, will they show less restraint in their accusations? We built a machine learning classifier whose accuracy (67\%) was significantly better than human accuracy (50\%) in a lie-detection task and conducted an incentivized lie-detection experiment in which we measured participants' propensity to use the algorithm, as well as the impact of that use on accusation rates. We find that the few people (33\%) who elect to use the algorithm drastically increase their accusation rates (from 25\% in the baseline condition up to 86\% when the algorithm flags a statement as a lie). They make more false accusations (18pp increase), but at the same time, the probability of a lie remaining undetected is much lower in this group (36pp decrease). We consider individual motivations for using lie detection algorithms and the social implications of these algorithms.
We consider a model where a signal (discrete or continuous) is observed with an additive Gaussian noise process. The signal is issued from a linear combination of a finite but increasing number of translated features. The features are continuously parameterized by their location and depend on some scale parameter. First, we extend previous prediction results for off-the-grid estimators by taking into account here that the scale parameter may vary. The prediction bounds are analogous, but we improve the minimal distance between two consecutive features locations in order to achieve these bounds. Next, we propose a goodness-of-fit test for the model and give non-asymptotic upper bounds of the testing risk and of the minimax separation rate between two distinguishable signals. In particular, our test encompasses the signal detection framework. We deduce upper bounds on the minimal energy, expressed as the 2-norm of the linear coefficients, to successfully detect a signal in presence of noise. The general model considered in this paper is a non-linear extension of the classical high-dimensional regression model. It turns out that, in this framework, our upper bound on the minimax separation rate matches (up to a logarithmic factor) the lower bound on the minimax separation rate for signal detection in the high dimensional linear model associated to a fixed dictionary of features. We also propose a procedure to test whether the features of the observed signal belong to a given finite collection under the assumption that the linear coefficients may vary, but do not change to opposite signs under the null hypothesis. A non-asymptotic upper bound on the testing risk is given. We illustrate our results on the spikes deconvolution model with Gaussian features on the real line and with the Dirichlet kernel, frequently used in the compressed sensing literature, on the torus.
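For concreteness, the observation model described above can be written schematically as follows; the notation is ours, not quoted from the paper.

```latex
% Schematic observation model (notation assumed): K translated features \varphi with
% locations t_j, scales s_j, and linear coefficients c_j, observed under additive
% Gaussian noise w.
\[
  y(t) \;=\; \sum_{j=1}^{K} c_j \, \varphi\!\left(t - t_j;\, s_j\right) \;+\; w(t),
  \qquad w \ \text{a Gaussian noise process.}
\]
% Signal detection then amounts to testing whether c = (c_1, \dots, c_K) vanishes,
% with the minimal detectable energy measured through the 2-norm \|c\|_2.
```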
Like fingerprints, cortical folding patterns are unique to each brain even though they follow a general species-specific organization. Some folding patterns have been linked with neurodevelopmental disorders. However, due to the high inter-individual variability, the identification of rare folding patterns that could become biomarkers remains a very complex task. This paper proposes a novel unsupervised deep learning approach to identify rare folding patterns and assess the degree of deviations that can be detected. To this end, we preprocess the brain MR images to focus the learning on the folding morphology and train a beta-VAE to model the inter-individual variability of the folding. We compare the detection power of the latent space and of the reconstruction errors, using synthetic benchmarks and one actual rare configuration related to the central sulcus. Finally, we assess the generalization of our method on a developmental anomaly located in another region. Our results suggest that this method encodes relevant folding characteristics that can be highlighted and better interpreted thanks to the generative power of the beta-VAE. The latent space and the reconstruction errors bring complementary information and enable the identification of rare patterns of different nature. This method generalizes well to a different region on another dataset. Code is available at https://github.com/neurospin-projects/2022_lguillon_rare_folding_detection.
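The sketch below illustrates, under assumed interfaces, the two complementary detection signals mentioned above: a reconstruction error in image space and a distance in the beta-VAE latent space (here a Mahalanobis distance to the training distribution, which is our choice for illustration). See the linked repository for the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    """Standard beta-VAE objective: reconstruction term plus beta-weighted KL divergence."""
    recon_term = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + beta * kl

def rarity_scores(x, recon, mu, train_mu_mean, train_mu_cov_inv):
    """Two anomaly scores for one preprocessed folding crop (sketch, assumed scoring).

    x, recon: image tensor and its reconstruction; mu: latent mean of shape (d,);
    train_mu_mean, train_mu_cov_inv: statistics of latent means over the training set.
    """
    recon_error = F.mse_loss(recon, x, reduction="mean")   # image-space score
    diff = mu - train_mu_mean                              # latent-space score
    mahalanobis = torch.sqrt(diff @ train_mu_cov_inv @ diff)
    return recon_error, mahalanobis

# Toy usage with random tensors standing in for a trained model's outputs.
d = 8
x, recon = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
mu, logvar = torch.randn(d), torch.zeros(d)
print(beta_vae_loss(recon, x, mu, logvar))
print(rarity_scores(x, recon, mu, torch.zeros(d), torch.eye(d)))
```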